Solid Particle Implementation for E-L Solver #1301
JA0307 wants to merge 1 commit into MFlowCode:master
Conversation
Force-pushed from a2bf4ac to e527f8f (Compare)
Claude Code Review

Incremental review from: e527f8f

Previously-flagged issues addressed in this update: …

New findings since last Claude review:

[HIGH] Declaration after executable statement: Fortran syntax error in MPI builds. The fix initializes `bubs_glb` before the local declarations:

```fortran
bubs_glb = 0      ! <- executable statement
integer :: ierr   ! <- local declaration after an executable statement: Fortran syntax error
integer :: i, j, k, ...
```

Per the Fortran standard, all declarations must precede the first executable statement in a scoping unit. Move the declarations above the first executable statement.

Remaining from prior review (not re-examined here): …
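The declaration-ordering fix can be sketched as follows. This is a minimal illustrative fragment, not the PR's actual subroutine: the subroutine name and loop indices are invented, and `bubs_glb` is assumed to be a module-level variable visible in this scope.

```fortran
subroutine s_mpi_bubble_setup_example()   ! illustrative name, not from the PR
    ! Specification part: the Fortran standard requires every declaration
    ! to precede the first executable statement of a scoping unit.
    integer :: ierr       ! moved above the assignment that previously preceded it
    integer :: i, j, k    ! illustrative loop indices

    ! Execution part begins here.
    bubs_glb = 0          ! assumes bubs_glb is a module-level variable
end subroutine s_mpi_bubble_setup_example
```

With this ordering, the same code compiles identically whether or not the MPI preprocessor branch inserts the `ierr` declaration.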
Claude Code Review

Incremental review from: 069237f

Previously-flagged issues addressed or carried forward: the declaration-after-executable bug (…).

New findings since last Claude review:

[HIGH] Declaration after executable statement still present: a Fortran standard violation. This was flagged in the previous review and remains unfixed. In an MFC_SIMULATION + MFC_MPI build the preprocessor expands to:

```fortran
bubs_glb = 0       ! <- executable statement
! ...
integer :: ierr    ! <- local declaration AFTER an executable statement: illegal per the Fortran standard
```

Fix: move the declarations above the first executable statement.

[MEDIUM] In …

[MEDIUM] `intent(inout)` declared on arrays that are only read:

```fortran
real(wp), ..., intent(inout) :: vL_field
real(wp), ..., intent(inout) :: vR_field
```

The implementation only reads these arrays (computes …); `intent(in)` would be the accurate attribute.

[LOW] Bare integer literal:

```fortran
aTemp(l) = 2._wp*f_bTemp/(fmass_g + fmass_v) - 3*fV*fVel(l)/fR
```

The bare integer `3` mixes with the surrounding working-precision literals; `3._wp` would be consistent.

[LOW] Typo in NVTX range label:

```fortran
call nvtxStartRange("BETA-COMM-SENDRECV-NO-RMDA")
```

Should be "RDMA", not "RMDA".

Remaining from prior reviews (not re-examined here): …
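The three small fixes above could look like the following. This is a hedged sketch: the array declarations' dimension specs were elided in the review (shown there as `...`), so the `dimension(:, :, :)` written here is an assumption, not taken from the PR.

```fortran
! intent(in): the subroutine only reads these arrays
real(wp), dimension(:, :, :), intent(in) :: vL_field   ! rank is assumed
real(wp), dimension(:, :, :), intent(in) :: vR_field   ! rank is assumed

! working-precision literal instead of a bare integer
aTemp(l) = 2._wp*f_bTemp/(fmass_g + fmass_v) - 3._wp*fV*fVel(l)/fR

! corrected NVTX range label spelling (RDMA, not RMDA)
call nvtxStartRange("BETA-COMM-SENDRECV-NO-RDMA")
```

Declaring `intent(in)` also lets the compiler reject any accidental write to these arrays.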
Claude Code Review

Incremental review from: 3c49f42

Previously-flagged issues addressed in this update: …

New findings since last Claude review:

[HIGH] Module-level `max_dt` array shadowed by a local scalar. This commit adds:

```fortran
! Module level (line 73)
real(wp), allocatable, dimension(:, :, :) :: max_dt
```

allocated conditionally in the initializer:

```fortran
if (cfl_dt) then
    @:ALLOCATE(max_dt(0:m, 0:n, 0:p)) ! line 474
end if
```

but … Additionally, inside …:

```fortran
real(wp) :: max_dt ! local scalar -- shadows the module-level 3-D array
```

The GPU parallel loop at lines 766-769 passes this local scalar to …. Fix: add …

Remaining from prior reviews (not re-examined here): …
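One conventional way to resolve this kind of shadowing is to rename the local temporary so the module-level array remains the sole owner of the name. The sketch below is illustrative only: the name `max_dt_loc` and the loop body are invented, since the review excerpt does not show how the scalar is used.

```fortran
! Module level: the 3-D array stays as-is.
! real(wp), allocatable, dimension(:, :, :) :: max_dt

! In the subroutine, rename the local so it no longer shadows the array:
real(wp) :: max_dt_loc   ! illustrative name; previously declared as "max_dt"

! Inside the cell loop (body is invented for illustration):
max_dt_loc = some_local_timestep_estimate(i, j, k)   ! hypothetical helper
max_dt(i, j, k) = max_dt_loc                         ! writes to the module array
```

Renaming is usually safer than deleting the local, since any existing scalar uses inside the loop keep working while the per-cell result now actually reaches the module-level array.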
Claude Code Review

Incremental review from: 754cc32

Previously-flagged findings from last review: no new issues were found in the prior increment. All previously flagged issues (declaration-after-executable, …).

New findings since last Claude review:

[MEDIUM] Optional … The …

[LOW] No-op assignments in the `case default` branch:

```fortran
case default
    do l = 1, num_dims
        fVel(l) = fVel(l)
        fPos(l) = fPos(l)
    end do
```

These are no-ops. The block can be removed entirely or replaced with a comment.

[LOW] Unconditional MPI broadcast of hardcoded-IC variables in pre_process: …

Remaining from prior reviews (not re-examined here): …
Force-pushed from d973f42 to eb03599 (Compare)
Claude Code Review

Incremental review from: 0756c08

Previously-flagged issues addressed in this update: optional …

New findings since last Claude review:

[HIGH] New module-level buffers are never deallocated. The finalizer currently only handles the pre-existing IB buffers:

```fortran
subroutine s_finalize_mpi_proxy_module()
#ifdef MFC_MPI
    if (ib) then
        @:DEALLOCATE(ib_buff_send, ib_buff_recv)
    end if
#endif
end subroutine s_finalize_mpi_proxy_module
```

All nine new module-level allocatable variables leak their host and GPU device memory. Add matching `@:DEALLOCATE` calls.

[MEDIUM] …

[LOW] Missing `intent` attributes. In:

```fortran
impure subroutine s_add_particles_to_transfer_list(nBub, pos, posPrev, include_ghost)
    real(wp), dimension(:, :) :: pos, posPrev ! no intent
    integer :: bubID, nbub ! nbub == nBub (case-insensitive); no intent
```

Similarly, both … lack intent attributes.

[LOW] No-op assignments in the `case default` branch, previously flagged and still present:

```fortran
case default
    do l = 1, num_dims
        fVel(l) = fVel(l)
        fPos(l) = fPos(l)
    end do
```

These are no-ops; the block can be removed entirely or replaced with a comment.
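A finalizer that also releases the new buffers could be sketched as below. The names `part_buff_send` and `part_buff_recv` are placeholders standing in for the nine new module-level allocatables, which are not listed in the review excerpt; the guard condition is likewise an assumption.

```fortran
subroutine s_finalize_mpi_proxy_module()
#ifdef MFC_MPI
    ! Pre-existing IB buffers (unchanged).
    if (ib) then
        @:DEALLOCATE(ib_buff_send, ib_buff_recv)
    end if
    ! Release the new particle-transfer buffers as well; the PR adds nine
    ! such module-level allocatables (illustrative names shown here).
    if (allocated(part_buff_send)) then
        @:DEALLOCATE(part_buff_send, part_buff_recv)
    end if
#endif
end subroutine s_finalize_mpi_proxy_module
```

Using the fypp `@:DEALLOCATE` macro, as the existing IB path does, keeps the host and GPU device copies in sync on teardown.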
Claude Code Review

Incremental review from: 82cd581

New findings since last Claude review: …
Force-pushed from 53621c4 to 82cd581 (Compare)
Force-pushed from 82cd581 to dad92d8 (Compare)
Description
This update expands the pre-existing, in-development E-L solver for bubble dynamics to include solid particle dynamics. This work supports the PSAAP center, which requires the capability to model solid particle dynamics in MFC.
Type of change
Testing
The solver has been tested on various 2D/3D problems involving fluid-particle interactions, such as spherical blasts surrounded by a layer of particles, shock-particle curtains, and collision tests.
The inputs to the EL solid particle solver have all been toggled on and off to verify that they work independently of each other and together.
The code has been tested for both CPU and GPU usage; the GPU path was tested on Tuolumne.
Two new files have been added:
m_particles_EL.fpp
m_particles_EL_kernels.fpp
m_particles_EL.fpp contains the main particle dynamics subroutines: it initializes the particles, computes fluid forces and coupling terms, computes collision forces, enforces boundary conditions, and writes the data for post-processing.
m_particles_EL_kernels.fpp contains the Gaussian kernel projection code and the subroutine that computes the force on a particle due to the fluid: the quasi-steady drag force, pressure gradient force, added mass force, Stokes drag, and gravitational force. Models for the quasi-steady drag are implemented here.
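As an illustration of the kind of quasi-steady drag model such a kernel file typically implements (the PR description does not name the specific correlations it uses), the standard Schiller-Naumann correction to Stokes drag can be written as a small function; the function name and the use of `wp` follow the project's conventions but are assumptions here.

```fortran
! Quasi-steady drag coefficient via the Schiller-Naumann correlation,
! commonly used for particle Reynolds numbers Re_p below roughly 1000:
!   C_D = (24/Re_p) * (1 + 0.15*Re_p**0.687)
! Illustrative sketch only; not necessarily the correlation used in the PR.
function f_drag_coefficient(Re_p) result(C_D)
    real(wp), intent(in) :: Re_p   ! particle Reynolds number, assumed > 0
    real(wp) :: C_D
    C_D = (24._wp/Re_p)*(1._wp + 0.15_wp*Re_p**0.687_wp)
end function f_drag_coefficient
```

In the Stokes limit (small Re_p) the bracketed correction tends to 1 and the classic 24/Re_p Stokes drag is recovered.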
Checklist
See the developer guide for full coding standards.
GPU changes (expand if you modified src/simulation/)